
    Self-Supervised Feature Learning by Learning to Spot Artifacts

    We introduce a novel self-supervised learning method based on adversarial training. Our objective is to train a discriminator network to distinguish real images from images with synthetic artifacts, and then to extract features from its intermediate layers that can be transferred to other data domains and tasks. To generate images with artifacts, we pre-train a high-capacity autoencoder and then use a damage-and-repair strategy: First, we freeze the autoencoder and damage the output of the encoder by randomly dropping its entries. Second, we augment the decoder with a repair network and train it in an adversarial manner against the discriminator. The repair network helps generate more realistic images by inpainting the dropped feature entries. To make the discriminator focus on the artifacts, we also make it predict which entries in the feature were dropped. We demonstrate experimentally that features learned by creating and spotting artifacts achieve state-of-the-art performance on several benchmarks. Comment: CVPR 2018 (spotlight)
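    The "damage" step above can be sketched with a toy mask, assuming the encoder output is a dense feature tensor; the networks and adversarial training are omitted, so this only illustrates how the drop mask doubles as the discriminator's auxiliary target.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the frozen encoder's output: a batch of 4 feature
# vectors with 16 entries each.
features = rng.normal(size=(4, 16))

# "Damage" step: randomly drop (zero out) feature entries. The drop
# mask is also the auxiliary target the discriminator must predict,
# which forces it to focus on the resulting artifacts.
drop_mask = rng.random(features.shape) < 0.5
damaged = np.where(drop_mask, 0.0, features)
```

In the full method, a repair network would inpaint the zeroed entries before decoding, and the discriminator would be trained adversarially on the decoded images.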

    Remembering Liberal Feminism in Radical Ways: Locating Conservative Strategies in the Narratives of Dr. Christina Hoff Sommers, Tammy Bruce, and Dr. Laura Schlessinger

    This dissertation identifies and challenges post-feminist narratives that remember the second wave, or 1960s and 1970s liberal feminism, as a radical form of activism. The narratives of three prominent post-feminist authors, Dr. Christina Hoff Sommers, Tammy Bruce, and Dr. Laura Schlessinger, are used as examples of how identification works as a rhetorical device that motivates individual actors to join in a struggle against liberal and radical feminist ideologies. I argue that each author draws on classically liberal and politically conservative virtues to define a true feminism that is at odds with alternative feminist commitments. I demonstrate how these authors create a subject position of a true feminist that is reminiscent of the classically liberal suffragist. In Burkean terms, each author constitutes the suffragist as a friend and juxtaposes her with the enemy: modern liberal and radical feminists. I articulate the consequences of such dialectical portrayals of feminist activism and further suggest that these authors' visions of feminism reinforce patriarchal practices, urging women to assimilate into a classically liberal society at the cost of social justice. In opposition to their memories of feminism, I offer a radical democratic approach of remembering feminism that is less concerned with the definition of feminism or feminist than it is with holistically addressing oppression and what oppression means to subjugated populations.

    Learning Generalizable Visual Patterns Without Human Supervision

    Owing to the existence of large labeled datasets, Deep Convolutional Neural Networks have ushered in a renaissance in computer vision. However, almost all of the visual data we generate daily - several human lives worth of it - remains unlabeled and thus out of reach of today’s dominant supervised learning paradigm. This thesis focuses on techniques that steer deep models towards learning generalizable visual patterns without human supervision. Our primary tool in this endeavor is the design of Self-Supervised Learning tasks, i.e., pretext-tasks for which labels do not involve human labor. Besides enabling the learning from large amounts of unlabeled data, we demonstrate how self-supervision can capture relevant patterns that supervised learning largely misses. For example, we design learning tasks that learn deep representations capturing shape from images, motion from video, and 3D pose features from multi-view data. Notably, these tasks’ design follows a common principle: the recognition of data transformations. The strong performance of the learned representations on downstream vision tasks such as classification, segmentation, action recognition, or pose estimation validates this pretext-task design. This thesis also explores the use of Generative Adversarial Networks (GANs) for unsupervised representation learning. Besides leveraging generative adversarial learning to define image transformations for self-supervised learning tasks, we also address training instabilities of GANs through the use of noise. While unsupervised techniques can significantly reduce the burden of supervision, in the end, we still rely on some annotated examples to fine-tune learned representations towards a target task. To improve the learning from scarce or noisy labels, we describe a supervised learning algorithm with improved generalization in these challenging settings.
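    The common principle above, recognition of data transformations, can be illustrated with a minimal rotation-prediction pretext task. Rotation prediction is a standard textbook example of this principle, not necessarily one of the specific tasks in this thesis; the data here is a random array standing in for an image.

```python
import numpy as np

rng = np.random.default_rng(1)

def make_rotation_sample(image):
    """Pair an image with the index of a random 90-degree rotation.
    The label is generated from the data itself, so no human
    annotation is needed."""
    k = int(rng.integers(0, 4))        # label: 0, 90, 180, or 270 degrees
    return np.rot90(image, k=k), k

image = rng.normal(size=(8, 8))
rotated, label = make_rotation_sample(image)
# A network trained to recover `label` from `rotated` must pick up
# shape and orientation cues that transfer to downstream tasks.
```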

    Teaching Communication Activism: Communication Education for Social Justice

    Teaching Communication Activism: Communication Education for Social Justice provides an innovative account of activist teaching. It is an excellent read for instructors navigating an increasingly overburdened and underfunded academic environment. The contributors to this edited volume provide inspirational accounts of social activism in the classroom. Teaching Communication Activism is a hopeful compilation with the potential to reinvigorate higher education by reuniting theory with practice and celebrating the cacophonous melody of multiple voices.

    Imperfect pseudo-merohedral twinning in crystals of fungal fatty acid synthase

    A case of imperfect pseudo-merohedral twinning in monoclinic crystals of fungal fatty acid synthase is discussed. A space-group transition during crystal dehydration resulted in a Moiré pattern-like interference of the twinned diffraction patterns.

    Audio-Visual Contrastive Learning with Temporal Self-Supervision

    We propose a self-supervised learning approach for videos that learns representations of both the RGB frames and the accompanying audio without human supervision. In contrast to images that capture the static scene appearance, videos also contain sound and temporal scene dynamics. To leverage the temporal and aural dimensions inherent to videos, our method extends temporal self-supervision to the audio-visual setting and integrates it with multi-modal contrastive objectives. As temporal self-supervision, we pose playback speed and direction recognition in both modalities and propose intra- and inter-modal temporal ordering tasks. Furthermore, we design a novel contrastive objective in which the usual pairs are supplemented with additional sample-dependent positives and negatives sampled from the evolving feature space. In our model, we apply such losses among video clips and between videos and their temporally corresponding audio clips. We verify our model design in extensive ablation experiments and evaluate the video and audio representations in transfer experiments to action recognition and retrieval on UCF101 and HMDB51, audio classification on ESC50, and robust video fingerprinting on VGG-Sound, with state-of-the-art results. Comment: AAAI-2
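    The playback speed and direction pretext task can be sketched on a toy "video" whose frames encode their own index; the actual networks, audio stream, and contrastive losses are omitted, and the speed set is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(2)

def make_playback_sample(video, speeds=(1, 2, 4)):
    """Resample a clip at a random playback speed and direction.
    The (speed, direction) pair is a free label for temporal
    self-supervision: no human annotation is involved."""
    speed_idx = int(rng.integers(len(speeds)))
    reverse = bool(rng.integers(2))
    clip = video[::speeds[speed_idx]]
    if reverse:
        clip = clip[::-1]
    return clip, speed_idx, reverse

# 16 frames; frame i is filled with the value i so ordering is visible.
video = np.repeat(np.arange(16.0)[:, None], 3, axis=1)
clip, speed_idx, reverse = make_playback_sample(video)
```

A model trained to recover `speed_idx` and `reverse` from the clip must attend to temporal dynamics rather than static appearance.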

    Spatio-Temporal Crop Aggregation for Video Representation Learning

    We propose Spatio-temporal Crop Aggregation for video representation LEarning (SCALE), a novel method that enjoys high scalability at both training and inference time. Our model builds long-range video features by learning from sets of video clip-level features extracted with a pre-trained backbone. To train the model, we propose a self-supervised objective consisting of masked clip feature prediction. We apply sparsity to both the input, by extracting a random set of video clips, and to the loss function, by only reconstructing the sparse inputs. Moreover, we use dimensionality reduction by working in the latent space of a pre-trained backbone applied to single video clips. These techniques make our method not only extremely efficient to train but also highly effective in transfer learning. We demonstrate that our video representation yields state-of-the-art performance with linear, non-linear, and KNN probing on common action classification and video understanding datasets.
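    The masked clip-feature-prediction objective can be sketched in a few lines, with the aggregation model replaced by a trivial mean-feature baseline; the feature dimensions, mask pattern, and predictor here are illustrative assumptions, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy clip-level features from a frozen, pre-trained backbone:
# 8 clips, 32 dimensions each (the backbone provides the
# dimensionality reduction mentioned above).
clip_features = rng.normal(size=(8, 32))

# Sparsity on the input: only a random subset of clips is visible.
visible = np.zeros(8, dtype=bool)
visible[rng.choice(8, size=4, replace=False)] = True

# Sparsity on the loss: the predictor (here, just the mean of the
# visible clip features, standing in for the trained model) is scored
# only on the masked clips it must reconstruct.
prediction = np.tile(clip_features[visible].mean(axis=0), (8, 1))
loss = np.mean((prediction[~visible] - clip_features[~visible]) ** 2)
```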

    Learning to Deblur and Rotate Motion-Blurred Faces

    We propose a solution to the novel task of rendering sharp videos from new viewpoints from a single motion-blurred image of a face. Our method handles the complexity of face blur by implicitly learning the geometry and motion of faces through joint training on three large datasets: FFHQ and 300VW, which are publicly available, and a new Bern Multi-View Face Dataset (BMFD) that we built. The first two datasets provide a large variety of faces and allow our model to generalize better. BMFD instead allows us to introduce multi-view constraints, which are crucial to synthesizing sharp videos from a new camera view. It consists of high frame rate synchronized videos from multiple views of several subjects displaying a wide range of facial expressions. We use the high frame rate videos to simulate realistic motion blur through averaging. Thanks to this dataset, we train a neural network to reconstruct a 3D video representation from a single image and the corresponding face gaze. We then provide a camera viewpoint relative to the estimated gaze and the blurry image as input to an encoder-decoder network to generate a video of sharp frames with a novel camera viewpoint. We demonstrate our approach on test subjects of our multi-view dataset and VIDTIMIT.
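    The blur-simulation step is simply temporal averaging of a sharp high-frame-rate burst, which can be sketched directly; the burst length and image size here are illustrative, not the dataset's actual parameters.

```python
import numpy as np

rng = np.random.default_rng(4)

def simulate_motion_blur(burst):
    """Average a burst of sharp, consecutive high-frame-rate frames
    into a single motion-blurred image, approximating a long camera
    exposure over the burst's duration."""
    return burst.mean(axis=0)

burst = rng.random((7, 4, 4))   # 7 sharp frames of a toy 4x4 image
blurry = simulate_motion_blur(burst)
```

Because the sharp frames are retained, each simulated blurry image comes with ground-truth sharp targets for supervision.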

    Changes in Length of Grandparenthood in Finland 1790-1959

    The importance of grandparents for their grandchildren is well-studied in several disciplines, and studies are now also addressing the potential effects of grandchildren on grandparental wellbeing. Any such effects are limited by the time grandparents share with their grandchildren. Changing child mortality rates, grandparental longevity, and childbearing patterns may have profoundly altered the length of grandparenthood across the demographic transition, but this has received little scientific attention. Using a genealogical dataset from Finland, we investigate changes in this shared time from the late 18th to the mid-20th century. We found that the number of shared years between grandparents and grandchildren was low until roughly the onset of industrialisation in Finland, after which point shared time increased rapidly, from both the grandchild and grandparent perspectives. Understanding changing patterns in the opportunity for intergenerational transfers between grandparents and grandchildren has implications for several fields of study, including biology, demography, sociology, health studies, and economics.